Growing demand on dark web for AI abuse images

New study finds a clear desire among online offenders to learn new technologies

There is clear evidence of a growing demand for AI-generated images of child sexual abuse on the dark web, according to a new research report published by Anglia Ruskin University’s International Policing and Public Protection Research Institute (IPPPRI).

The innovative study of the ‘dark web’ seeks to understand how online offenders are using artificial intelligence (AI) to create child sexual abuse material (CSAM).

The ‘IPPPRI Insights’ publication comes after the Internet Watch Foundation shared a report highlighting the continued growth of this emerging technology as a tool to exploit children.

Researchers Dr Deanna Davy and Professor Sam Lundrigan analysed chats that had taken place in dark web forums over the past 12 months, and found clear evidence of growing interest in this technology, its continued use by offenders, and a collective desire among online offenders for others to learn more and create new abuse imagery.

Dr Davy explained:

“We know without doubt that AI-produced child sexual abuse material is a rapidly growing problem, but we need to understand a great deal more about precisely how offenders are creating it, how widely it is being shared, and the impact it is having on offender pathways. Obtaining this understanding will ultimately help us to prevent and combat these types of offences.”

The research, part of a wider programme of work into the tech-facilitation of child sexual abuse funded by the Dawes Trust, found that forum members are actively teaching themselves how to create AI-generated CSAM by accessing guides and videos online, and by sharing advice and guidance amongst themselves.

Analysis also showed that forum members are using their existing supply of non-AI-generated images and videos to facilitate their learning, and that many shared their hopes and expectations that the technology would evolve, making it even easier for them to create this material. Some forum members also referred to those creating the AI imagery as “artists”.

Dr Davy concluded:

“We know that one of the many significant risks associated with using AI to create this type of material is the huge challenge it poses for law enforcement agencies tasked with understanding, coding and responding to the material they find. It is critical that our police and public protection agencies seek to better understand this, so that they can effectively respond to this technological change which shows no sign of slowing down.

“There is a misconception that AI-generated images are ‘victimless’, and this could not be further from the truth. We found that many of the offenders are sourcing images of children in order to manipulate them, and that the desire for ‘hardcore’ imagery, escalating from ‘softcore’, is regularly discussed.”

Professor Lundrigan added:

“The conversations we analysed show that through the proliferation of advice and guidance on how to use AI in this way, this type of child abuse material is escalating and offending is increasing. This adds to the growing global threat of online child abuse in all forms, and must be viewed as a critical area to address in our response to this type of crime.”

The full report can be viewed here.